Current Issue: October–December | Volume: 2023 | Issue: 4 | Articles: 5
Software defect prediction (SDP) assists software testing by allocating test resources sensibly, reducing costs and improving development efficiency. To improve prediction performance, researchers have designed many defect-related features for SDP. However, feature redundancy (FR) and feature irrelevance, caused by the growing dimensionality of the data, can greatly degrade prediction performance. To address these problems, researchers have proposed various dimensionality reduction methods, which broadly fall into two categories: feature selection and feature extraction. Each category has its own advantages and limitations. In this paper, we propose a Hybrid Feature Dimensionality Reduction Approach (HFDRA) for SDP that combines the two kinds of methods to improve SDP performance. HFDRA proceeds in two stages: feature selection and feature extraction. First, in the feature selection stage, HFDRA divides the original features into several feature subsets with a clustering algorithm. Then, in the feature extraction stage, kernel principal component analysis (KPCA) reduces the dimensionality of each subset. Finally, the reduced-dimensional data is used to build the prediction model. In the empirical study, we use 22 projects from the AEEEM, SOFTLAB, MORPH, and ReLink datasets as experimental subjects. We first compare our approach with seven baseline methods and three state-of-the-art methods, and then analyze the relationship between FR and prediction performance. Experimental results show that our approach outperforms state-of-the-art dimensionality reduction methods for defect prediction.
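The two-stage pipeline the abstract describes (cluster the features, then apply KPCA to each cluster) can be sketched as follows. This is a hypothetical reconstruction under stated assumptions, not the paper's implementation: the abstract does not specify the clustering algorithm, so a greedy correlation-based grouping stands in for it, an RBF kernel is assumed for KPCA, and only the top kernel component per subset is kept. The function name `hybrid_reduce` and the thresholds are invented for illustration.

```python
import numpy as np

def hybrid_reduce(X, corr_threshold=0.7, gamma=0.1):
    """Hypothetical two-stage sketch: (1) greedily group correlated
    features into subsets; (2) run RBF kernel PCA on each subset and
    keep its top component. X has shape (samples, features)."""
    n, d = X.shape
    corr = np.abs(np.corrcoef(X, rowvar=False))
    # Stage 1 (feature selection): greedy correlation-based clustering.
    unassigned = list(range(d))
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        group = [seed] + [f for f in unassigned if corr[seed, f] > corr_threshold]
        unassigned = [f for f in unassigned if f not in group]
        clusters.append(group)
    # Stage 2 (feature extraction): KPCA per subset, one component each.
    comps = []
    for group in clusters:
        S = X[:, group]
        sq = np.sum(S ** 2, axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * S @ S.T))
        one = np.ones((n, n)) / n
        Kc = K - one @ K - K @ one + one @ K @ one  # center the kernel
        w, v = np.linalg.eigh(Kc)                    # ascending eigenvalues
        comps.append(v[:, -1:] * np.sqrt(max(w[-1], 0.0)))
    return np.hstack(comps)  # reduced data: one column per feature subset
```

The reduced matrix would then feed whatever classifier builds the defect prediction model.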
Good teaching outcomes come from effective teaching design. In this article, we combine the BOPPPS model, an advanced teaching concept, with the Tina virtual simulation software to develop a teaching design. The BOPPPS model is an effective and efficient teaching model comprising six parts: bridge-in, objective, pre-assessment, participatory learning, post-assessment, and summary. In this article, the bridge-in is introduced through practical examples of triode amplifier circuits. The objective includes knowledge, ability, and value objectives. Pre-assessment is realized by simulating the triode output characteristic. Participatory learning is presented by simulating three kinds of basic amplifier circuits and analyzing the simulation results. In addition, a flipped classroom is designed to stimulate students' learning enthusiasm and innovation ability. Post-assessment is completed by asking questions, and the summary is completed by students and supplemented by teachers. Throughout this process, different simulation waveforms are obtained by using the Tina software to simulate the various circuits layer by layer. Practice has shown that the proposed method not only improves students' ability to analyze and design practical circuits but also stimulates their learning enthusiasm. Teaching design ideas become clearer, and teaching quality improves.
This paper discusses Python-SystemVerilog (Python-SV), a simulation-based verification approach that leverages the power of Python and SystemVerilog. Using Python-implemented UVM classes with SystemVerilog enables users to write less code, minimize errors, and reduce verification time. This paper evaluates the use of Python-SV in the verification of digital designs, along with its benefits, limitations, and future prospects. Python-SV is a research area that investigates the feasibility of building a high-level verification environment using Python and SystemVerilog, aiming to provide a unified framework for the design, simulation, and verification of digital systems with an emphasis on ease of use and productivity. SystemVerilog is a hardware description and verification language widely used for designing digital systems. Python, in turn, is a powerful high-level programming language used across software engineering, scientific computing, and data analysis; its popularity has grown in recent years thanks to its simplicity, ease of use, and wide range of libraries and frameworks. Python-SV research primarily focuses on four areas. 1) Integration of Python and SystemVerilog: seamlessly integrating the two languages so that designers can write test benches and verification code in Python and interface them with SystemVerilog modules, simplifying the development and maintenance of large, complex verification environments. 2) Development of Python libraries for verification: libraries that provide a higher-level interface for writing test benches and for tasks such as analysis and visualization of simulation results. 3) Implementation of verification methodologies: implementing industry-standard verification methodologies, such as the Universal Verification Methodology (UVM), in Python so that designers can develop and simulate UVM-compliant test benches. 4) Development of simulation tools: tools that extend the capabilities of traditional SystemVerilog simulators, leveraging Python for complex data analysis and visualization and providing a more intuitive, user-friendly interface for working with simulation results. Overall, Python-SV research aims to bring the benefits of Python to digital system verification, enabling designers to build more efficient, productive, and flexible verification environments.
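The UVM-style structure the abstract alludes to (a driver feeding stimulus to a design under test, with a scoreboard checking results against a reference model) can be illustrated in pure Python. This is a hypothetical, simulator-free sketch, not the Python-SV framework itself: in a real flow the DUT would be a SystemVerilog module reached through DPI/VPI co-simulation (open-source projects such as cocotb and pyuvm work in this space), whereas here a Python lambda stands in for it, and the names `Driver`, `Scoreboard`, and `run_test` are invented for illustration.

```python
import random

class Driver:
    """Drives stimulus into the DUT. Here the DUT is a Python stand-in;
    in Python-SV it would be a SystemVerilog module behind DPI/VPI."""
    def __init__(self, dut):
        self.dut = dut

    def send(self, a, b):
        return self.dut(a, b)

class Scoreboard:
    """UVM-scoreboard-style checker: compares DUT output against a
    golden reference model and counts mismatches."""
    def __init__(self):
        self.errors = 0

    def check(self, a, b, actual):
        if actual != a + b:  # reference model: 8-bit-style adder
            self.errors += 1

def run_test(n=100, seed=0):
    """Run n randomized transactions and return the mismatch count."""
    random.seed(seed)
    dut = lambda a, b: a + b  # stand-in for the hardware adder
    drv, sb = Driver(dut), Scoreboard()
    for _ in range(n):
        a, b = random.randrange(256), random.randrange(256)
        sb.check(a, b, drv.send(a, b))
    return sb.errors
```

A passing run returns zero errors; injecting a bug into the stand-in DUT (e.g., `a + b + 1`) would drive the error count up, which is the behavior a scoreboard exists to catch.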
The pandemic made remote work a reality in many organizations. Despite the possible drawbacks of this form of work, many employers and employees appreciate its flexibility and effectiveness, so employers are looking for the best tools to support it. Choosing such tools can be difficult because of their complexity, differing functionality, and the differing operating conditions of companies. Decisions on a given solution are usually made by a group of decision makers whose subjective assessments often differ, making the decision even harder. The aim of this article is to propose a methodological solution that supports the assessment of the most popular teleconferencing systems and generates their ranking. Its distinguishing feature is the combination of two methodological aspects that facilitate the selection process. The first is the ability to take into account quantitative and qualitative criteria that are expressed linguistically and are uncertain in nature (the NEAT F-PROMETHEE method). The second is the ability to incorporate the assessments of many experts, including a study of the consensus between them (the PROSA GDSS method). Applying these combined methods to teleconferencing platforms made it possible to rank them and to identify the solution that best meets the adopted criteria, based on the experts' opinions. That solution turned out to be Microsoft Teams, whose functionality, usability, multi-platform support, and other elements proved crucial in the overall assessment. The results may serve as a guideline for managers and decision makers facing the choice of a tool to support remote work.
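NEAT F-PROMETHEE and PROSA GDSS are extensions of the PROMETHEE family of outranking methods. As a minimal sketch of the underlying idea only (not the article's actual fuzzy, group-decision variant), classic PROMETHEE II with the "usual" preference function can be written in a few lines; the alternatives, criteria values, and weights in the usage example are invented for illustration.

```python
def promethee_ii(scores, weights):
    """Minimal PROMETHEE II sketch with the 'usual' preference function
    P(d) = 1 if d > 0 else 0. scores[a][c] is alternative a's value on
    criterion c (higher is better); weights should sum to 1.
    Returns the net outranking flow of each alternative."""
    n = len(scores)

    def pi(a, b):
        # Aggregated preference of a over b: sum of weights of the
        # criteria on which a strictly beats b.
        return sum(w for s_a, s_b, w in zip(scores[a], scores[b], weights)
                   if s_a > s_b)

    phi = []
    for a in range(n):
        pos = sum(pi(a, b) for b in range(n) if b != a) / (n - 1)  # leaving flow
        neg = sum(pi(b, a) for b in range(n) if b != a) / (n - 1)  # entering flow
        phi.append(pos - neg)  # net flow; rank by descending value
    return phi

# Hypothetical example: three platforms scored on three criteria
# (e.g., functionality, usability, multi-platform support).
platform_scores = [[5, 4, 3], [3, 5, 2], [2, 2, 5]]
criterion_weights = [0.5, 0.3, 0.2]
net_flows = promethee_ii(platform_scores, criterion_weights)
```

With these made-up scores the first alternative obtains the highest net flow; the article's method additionally handles fuzzy linguistic criteria and aggregates multiple experts' judgments, which this sketch omits.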
Over the past decade, the use of open-source software has grown. Today, many companies, including Google, Microsoft, Meta, Red Hat, MongoDB, and Apache, are major open-source contributors. With the increased use of open-source software, and its integration into custom-developed software, the quality of these software components grows in importance. This study examined a sample of open-source applications from GitHub. Static software analysis was conducted, and each application was classified by risk level. Of the analyzed applications, 90% were classified as low risk or moderately low risk, indicating a high level of quality for open-source applications.